
    Conditional Minimum Volume Ellipsoid with Application to Multiclass Discrimination

    In this paper, we present a new formulation for constructing an n-dimensional ellipsoid by generalizing the computation of the minimum volume covering ellipsoid. The proposed ellipsoid construction is associated with a user-defined parameter β ∈ [0, 1), and is formulated as a convex optimization problem based on the CVaR minimization technique proposed by Rockafellar and Uryasev [15]. An interior point algorithm for its solution is developed by modifying the DRN algorithm of Sun and Freund [19] for the minimum volume covering ellipsoid. By exploiting the solution structure, the associated parametric computation can be performed efficiently. Moreover, maximization of the normal likelihood function can be characterized in the context of the proposed ellipsoid construction, so that likelihood maximization can be generalized with the parameter β. Motivated by this fact, the new ellipsoid construction is examined through a multiclass discrimination problem. Numerical results are given, showing the computational efficiency of the interior point algorithm and the capability of the proposed generalization.
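    As an illustration of the formulation (not the paper's DRN-based interior point method), the construction can be prototyped with an off-the-shelf conic solver: the minimum-volume objective -log det A is kept, while the all-points covering constraint is replaced by a CVaR constraint on the residuals ||A x_i + b||^2 - 1. This is a sketch, assuming cvxpy is available; all names are ours.

```python
import cvxpy as cp
import numpy as np

def conditional_mve(X, beta=0.9):
    """Sketch of a CVaR-generalized minimum-volume ellipsoid {x : ||A x + b|| <= 1},
    solved with a generic conic solver rather than a specialized interior point method."""
    n, d = X.shape
    A = cp.Variable((d, d), PSD=True)     # shape matrix of the ellipsoid
    b = cp.Variable(d)
    alpha = cp.Variable()                 # auxiliary VaR-level variable (Rockafellar-Uryasev)
    # Squared "ellipsoid distance" of each point; points inside satisfy z_i <= 1.
    Z = X @ A + np.ones((n, 1)) @ cp.reshape(b, (1, d))
    z = cp.sum(cp.square(Z), axis=1)
    # Require CVaR_beta of the constraint residuals z_i - 1 to be nonpositive:
    cvar = alpha + cp.sum(cp.pos(z - 1 - alpha)) / ((1 - beta) * n)
    prob = cp.Problem(cp.Minimize(-cp.log_det(A)), [cvar <= 0])
    prob.solve()
    return A.value, b.value
```

    As β approaches 1, the CVaR constraint tends to the worst-case requirement that every residual be nonpositive, recovering the classical minimum volume covering ellipsoid.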

    Exact Penalty Method for Knot Selection of B-Spline Regression

    This paper presents a new approach to selecting knots at the same time as estimating the B-spline regression model. Such simultaneous selection of knots and model is not trivial, but our strategy makes it possible by employing a nonconvex regularization on the usual least-squares method. More specifically, motivated by a constraint that directly designates (an upper bound on) the number of knots to be used, we present an (unconstrained) regularized least-squares reformulation, which is later shown to be equivalent to the motivating cardinality-constrained formulation. The obtained formulation is further modified so that we can employ a proximal gradient-type algorithm, known as GIST, for a class of nonsmooth nonconvex optimization problems. Under a mild technical assumption, the algorithm is shown to reach a local minimum of the problem. Since any local minimum of the problem satisfies the cardinality constraint, the proposed algorithm can be used to obtain a spline regression model that depends on at most a designated number of knots. Numerical experiments demonstrate how our approach performs on synthetic and real data sets.
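    The GIST machinery referenced above can be sketched for a plain l0-penalized least-squares surrogate; the paper's equivalent exact penalty may differ, and the design matrix B, the hard-threshold prox, and all names here are illustrative assumptions.

```python
import numpy as np

def gist_l0(B, y, lam, max_iter=500, sigma=1e-5, eta=2.0):
    """Minimal GIST-style proximal gradient sketch for
    min_c 0.5*||B c - y||^2 + lam*||c||_0 (hard-threshold prox)."""
    n, p = B.shape
    c = np.zeros(p)
    grad = lambda v: B.T @ (B @ v - y)
    F = lambda v: 0.5 * np.sum((B @ v - y) ** 2) + lam * np.count_nonzero(v)
    t = 1.0                                   # inverse step size (Lipschitz estimate)
    for _ in range(max_iter):
        g, F_old = grad(c), F(c)
        while True:                           # backtracking line search (monotone variant)
            v = c - g / t
            c_new = np.where(np.abs(v) > np.sqrt(2.0 * lam / t), v, 0.0)  # l0 prox
            if F(c_new) <= F_old - 0.5 * sigma * t * np.sum((c_new - c) ** 2):
                break
            t *= eta
            if t > 1e12:                      # safety cap for this sketch
                break
        s, r = c_new - c, grad(c_new) - g
        c = c_new
        t = np.clip(abs(s @ r) / max(s @ s, 1e-16), 1e-4, 1e10)  # BB initialization
        if np.sqrt(np.sum(s ** 2)) < 1e-8:
            break
    return c
```

    Zeroed coefficients then indicate knots that can be dropped, so the fitted spline depends on at most the designated number of active knots.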

    Calibration of Distributionally Robust Empirical Optimization Models

    We study the out-of-sample properties of robust empirical optimization problems with smooth ϕ-divergence penalties and smooth concave objective functions, and develop a theory for data-driven calibration of the non-negative "robustness parameter" δ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a "little bit of robustness" (i.e., δ small and positive) is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Our examples show that robust solutions resulting from "open loop" calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out-of-sample, while those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust.
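    A minimal sketch of the bootstrap approximation of such a robust mean-variance frontier might look as follows; `solve_robust` (the robust solver for a given δ) and `reward` (per-sample reward of a decision) are hypothetical user-supplied hooks, not part of the paper.

```python
import numpy as np

def robust_frontier(data, deltas, solve_robust, reward, n_boot=200, seed=0):
    """Bootstrap estimates of the mean and variance of the out-of-sample reward
    for each candidate robustness parameter delta (a calibration sketch)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    means, variances = [], []
    for d in deltas:
        vals = []
        for _ in range(n_boot):
            train = data[rng.integers(0, n, n)]       # bootstrap resample as training set
            x = solve_robust(train, d)                # robust decision fit on the resample
            vals.append(np.mean(reward(x, data)))     # scored on the original sample
        vals = np.asarray(vals)
        means.append(vals.mean())
        variances.append(vals.var())
    return np.asarray(means), np.asarray(variances)
```

    A calibrated δ can then be read off the estimated frontier, e.g., the largest mean reward subject to an acceptable variance, rather than being fixed "open loop".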

    A Linear Classification Model Based on Conditional Geometric Score

    Abstract. We propose a two-class linear classification model that takes into account the Euclidean distance from each data point to the discriminant hyperplane, using a risk measure known as the conditional value-at-risk in financial risk management. The model is formulated as a nonconvex programming problem, and we present a solution method for obtaining either a globally or a locally optimal solution by examining the special structure of the problem. This model is also proved to be equivalent to ν-support vector classification under a certain parameter setting, and numerical experiments show that the proposed model generally has better predictive accuracy. Key words. classification model, discriminant hyperplane, conditional value-at-risk, nonconvex programming
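    For intuition, a convexified version of the model can be written directly in Rockafellar-Uryasev form: minimize the CVaR of the negative signed margins, with the paper's nonconvex ||w|| = 1 constraint relaxed to ||w|| <= 1. This is a sketch under that relaxation, not the paper's exact solution method.

```python
import cvxpy as cp
import numpy as np

def cvar_linear_classifier(X, y, beta=0.9):
    """Sketch: minimize CVaR_beta of the negative margins -y_i (w.x_i + b)
    subject to ||w|| <= 1 (convex surrogate for the geometric-distance model)."""
    m, d = X.shape
    w, b, alpha = cp.Variable(d), cp.Variable(), cp.Variable()
    margins = cp.multiply(y, X @ w + b)           # signed margins y_i (w.x_i + b)
    cvar = alpha + cp.sum(cp.pos(-margins - alpha)) / ((1 - beta) * m)
    prob = cp.Problem(cp.Minimize(cvar), [cp.norm(w, 2) <= 1])
    prob.solve()
    return w.value, b.value
```

    When the relaxed solution satisfies ||w|| = 1, it is also feasible for the original geometric model; otherwise the paper's specialized method for the nonconvex constraint is needed.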

    Dynamic portfolio selection with linear control policies for coherent risk minimization

    This paper is concerned with a linear control policy for dynamic portfolio selection. We develop this policy by incorporating time-series behaviors of asset returns on the basis of coherent risk minimization. Analyzing the dual form of our optimization model, we demonstrate that the investment performance of linear control policies is directly connected to the intertemporal covariance of asset returns. To mitigate overfitting to the training data (i.e., historical asset returns), we apply robust optimization and prove that the worst-case coherent risk measure can be decomposed into the empirical risk measure and penalty terms. Numerical results demonstrate that when the number of assets is small, linear control policies deliver good out-of-sample investment performance; when the number of assets is large, the penalty terms improve the out-of-sample performance.
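    A scenario-based sketch of the linear control policy idea is given below; the exact policy parametrization, constraint set, and risk measure in the paper may differ (here scenario CVaR stands in for the coherent risk measure, and the additive-return approximation is an assumption).

```python
import cvxpy as cp
import numpy as np

def linear_policy_cvar(R, beta=0.95):
    """Sketch: portfolio weights w_t = w0 + W r_{t-1} respond affinely to the
    previous period's returns; minimize scenario CVaR of the cumulative loss
    over historical paths R with shape (scenarios, periods, assets)."""
    S, T, d = R.shape
    w0, W = cp.Variable(d), cp.Variable((d, d))
    alpha = cp.Variable()
    losses = []
    for s in range(S):
        ret = 0
        for t in range(T):
            r_prev = R[s, t - 1] if t > 0 else np.zeros(d)
            w_t = w0 + W @ r_prev              # linear control policy
            ret = ret + R[s, t] @ w_t          # additive (linearized) portfolio return
        losses.append(-ret)
    # Rockafellar-Uryasev scenario CVaR of the loss distribution:
    cvar = alpha + cp.sum(cp.pos(cp.hstack(losses) - alpha)) / ((1 - beta) * S)
    cons = [cp.sum(w0) == 1, cp.sum(W, axis=0) == 0]  # unit budget for every r_{t-1}
    cp.Problem(cp.Minimize(cvar), cons).solve()
    return w0.value, W.value
```

    The budget constraints enforce 1'w_t = 1 for every realization of r_{t-1}, which is why the columns of W must sum to zero.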

    B-423 Conditional Minimum Volume Ellipsoid with Applications to Subset Selection for MVE Estimator and Multiclass Discrimination

    Abstract. The MVE estimator is an important estimator in robust statistics, characterized by the ellipsoid of minimal volume that contains 100β percent of the given points in ℝⁿ, where β ∈ [0.5, 1.0) is of usual interest. This minimal ellipsoid can be viewed as a generalization of the minimum volume covering ellipsoid, which covers all the given points and corresponds to the case β = 1.0. Although computing the minimum volume covering ellipsoid is tractable, computing the MVE estimator (equivalently, the associated minimal ellipsoid) is cumbersome since the problem has a nonconvex structure in general. In this paper, we present a new formulation for constructing an ellipsoid that also generalizes the minimum volume covering ellipsoid, on the basis of the CVaR minimization technique proposed by Rockafellar and Uryasev [13]. In contrast to computing the MVE estimator, the proposed ellipsoid construction is formulated as a convex optimization problem, and an interior point algorithm for its solution can be developed. In addition, the optimization gives an upper bound on the volume of the ellipsoid associated with the MVE estimator, a fact that can be exploited for approximate computation of the estimator.
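    In standard notation, the relaxation behind this construction and the upper-bound property can be summarized as follows (a sketch; the report's exact formulation may differ).

```latex
% MVE estimator: smallest ellipsoid containing a fraction beta of the points.
\begin{align*}
&\min_{A \succ 0,\; b}\; -\log\det A
\quad \text{s.t.} \quad
\#\{\, i : \|A x_i + b\|^2 \le 1 \,\} \ge \lceil \beta n \rceil
&& \text{(nonconvex)} \\
% Convex surrogate: replace the cardinality (VaR-type) constraint by CVaR.
&\min_{A \succ 0,\; b,\; \alpha}\; -\log\det A
\quad \text{s.t.} \quad
\alpha + \frac{1}{(1-\beta)n}\sum_{i=1}^{n} \max\bigl\{\|A x_i + b\|^2 - 1 - \alpha,\; 0\bigr\} \le 0
&& \text{(convex)}
\end{align*}
```

    Since CVaR dominates the corresponding quantile (VaR) constraint, any (A, b) feasible for the convex surrogate also contains at least 100β percent of the points; its volume therefore upper-bounds the MVE estimator's minimal volume, which is the bound mentioned in the abstract.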